Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud, etc.
at Exusia

Responsibilities
- Design and develop frameworks, internal tools, and scripts for testing large-scale data systems, machine learning algorithms, and responsive user interfaces (see the sketch after this list).
- Create repeatability in testing through automation
- Participate in code reviews, design reviews, architecture discussions.
- Conduct performance testing and benchmarking of Bidgely product suites
- Drive the adoption of best practices around coding, design, quality, and performance in your team.
- Lead the team on all technical aspects and own the quality of your team’s deliverables
- Understand requirements, design exhaustive test scenarios, execute manual and automated test cases, dig deeper into issues, identify root causes, and articulate defects clearly.
- Strive for excellence in quality by looking beyond obvious scenarios and stated requirements and by keeping end-user needs in mind.
- Debug automation, product, deployment, and production issues and work with stakeholders/team on quick resolution
- Deliver a high-quality robust product in a fast-paced start-up environment.
- Collaborate with the engineering team and product management to elicit & understand their requirements and develop potential solutions.
- Stay current with the latest technology, tools, and methodologies; share knowledge by clearly articulating results and ideas to key decision-makers.
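As a rough illustration of the framework-style testing described in the first item above, here is a minimal data-quality check in Python using pytest. The events table, its schema, and the in-memory SQLite stand-in are all hypothetical placeholders, not part of any actual product stack.

```python
# A minimal, hypothetical sketch of a data-quality check such a framework
# might include; table name, schema, and SQLite stand-in are assumptions.
import sqlite3  # stand-in for the production warehouse connection

import pytest


@pytest.fixture
def conn():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    db.executemany("INSERT INTO events (payload) VALUES (?)", [("a",), ("b",)])
    yield db
    db.close()


def test_events_table_is_not_empty(conn):
    # Guard against silent upstream pipeline failures.
    count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    assert count > 0


def test_events_have_no_null_payloads(conn):
    nulls = conn.execute(
        "SELECT COUNT(*) FROM events WHERE payload IS NULL"
    ).fetchone()[0]
    assert nulls == 0
```

In a real framework these checks would run against the production data store and be parameterized over many tables; the structure shown here is the repeatable-automation pattern the posting describes.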
Requirements
- BS/MS in Computer Science, Electrical Engineering, or equivalent
- 6+ years of experience in designing automation frameworks and tools
- Strong object-oriented design skills, knowledge of design patterns, and an uncanny ability to design intuitive module- and class-level interfaces
- Deep understanding of design patterns and optimizations
- Experience leading multi-engineer projects and mentoring junior engineers
- Good understanding of data structures and algorithms and their space and time complexities. Strong technical aptitude and a good knowledge of CS fundamentals
- Experience in non-functional testing and performance benchmarking
- Knowledge of Test-Driven Development and experience implementing CI/CD
- Strong hands-on and practical working experience with at least one programming language: Java/Python/C++
- Strong analytical, problem solving, and debugging skills.
- Strong experience in API automation using Jersey/Rest Assured (see the sketch after this list).
- Fluency in automation tools and frameworks such as Selenium, TestNG, JMeter, JUnit, Jersey, etc.
- Exposure to distributed systems or web applications
- Proficiency with RDBMSs or large-scale data systems such as Hadoop, Cassandra, etc.
- Hands-on experience with build tools like Maven/Gradle & Jenkins
- Experience in testing on various browsers and devices.
- Strong communication and collaboration skills.
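As a rough sketch of the API automation mentioned above: the posting names Jersey/Rest Assured, which are Java libraries, but the same pattern is shown here in Python (one of the languages listed above) with requests and pytest. The base URL and endpoints are placeholders, not a real service.

```python
# A minimal API automation sketch. Endpoints, payloads, and the base URL
# are hypothetical placeholders; substitute the service under test.
import requests

BASE_URL = "https://api.example.com/v1"  # assumption: not a real service


def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200


def test_create_user_echoes_payload():
    payload = {"name": "test-user"}
    resp = requests.post(f"{BASE_URL}/users", json=payload, timeout=10)
    assert resp.status_code == 201
    assert resp.json()["name"] == payload["name"]
```

The Rest Assured equivalent follows the same given/when/then shape; the key point is that each test is self-contained, asserts on both status and body, and can run unattended in CI.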
Data Analyst
Job Description
Summary
Are you passionate about solving large and complex data problems, eager to make an impact, and keen to work on ground-breaking big data technologies? Then we are looking for you.
At Amagi, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If so, Amagi’s Data Engineering and Business Intelligence team is looking for passionate, detail-oriented, technically savvy, energetic team members who like to think outside the box.
Amagi’s Data warehouse team deals with petabytes of data catering to a wide variety of real-time, near real-time and batch analytical solutions. These solutions are an integral part of business functions such as Sales/Revenue, Operations, Finance, Marketing and Engineering, enabling critical business decisions. Designing, developing, scaling and running these big data technologies using native technologies of AWS and GCP are a core part of our daily job.
Key Qualifications
- Experience in building highly cost-optimised data analytics solutions
- Experience in designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
- Hands-on experience in building high-quality ETL applications, data pipelines, and analytics solutions that ensure data privacy and regulatory compliance.
- Experience in working with AWS or GCP
- Experience with relational and NoSQL databases
- Experience with full-stack web development (preferably Python)
- Expertise with data visualisation systems such as Tableau and QuickSight
- Proficiency in writing advanced SQL queries, with expertise in performance tuning for large data volumes
- Familiarity with ML/AI technologies is a plus
- Demonstrate strong understanding of development processes and agile methodologies
- Strong analytical and communication skills; should be self-driven, highly motivated, and able to learn quickly
Description
Data Analytics is at the core of our work, and you will have the opportunity to:
- Design data-warehousing solutions on Amazon S3 with Athena, Redshift, GCP Bigtable, etc.
- Lead quick prototypes by integrating data from multiple sources
- Do advanced Business Analytics through ad-hoc SQL queries
- Work on Sales and Finance reporting solutions using Tableau, HTML5, and React applications
We build amazing experiences and create depth in knowledge for our internal teams and our leadership. Our team is a friendly bunch of people that help each other grow and have a passion for technology, R&D, modern tools and data science.
Our work relies on a deep understanding of the company's needs and an ability to work through vast amounts of internal data such as sales, KPIs, forecasts, inventory, etc. Key expectations of this role include data analytics, building data lakes, and delivering end-to-end reporting solutions. If you have a passion for cost-optimised analytics and data engineering and are eager to learn advanced data analytics at a large scale, this might just be the job for you.
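As a hedged sketch of the cost-optimised Athena querying described above, the snippet below starts a partition-pruned query with boto3. The bucket, database, table, and partition column are illustrative assumptions, not real configuration.

```python
# A sketch of a cost-optimised Athena query via boto3. All names
# (sales_db, daily_sales, dt, the results bucket) are assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Filtering on the partition column (dt) limits the S3 data scanned,
# which is what Athena bills for.
query = """
SELECT region, SUM(revenue) AS revenue
FROM sales_db.daily_sales
WHERE dt = '2024-01-01'
GROUP BY region
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started query:", response["QueryExecutionId"])
```

The same pruning idea carries over to Redshift and BigQuery: the cheapest byte is the one the engine never scans.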
Education & Experience
A bachelor’s or master’s degree in Computer Science with 5 to 7 years of experience; previous experience in data engineering is a plus.
Site Reliability Engineer
Responsibilities
- Our site reliability engineers work on improving the availability, scalability, performance, and reliability of enterprise production services for our products as well as our customers’ data lake environments.
- You will use your expertise to improve the reliability and performance of Hadoop data lake clusters and data management services. Just like our products, our SREs are expected to be platform- and vendor-agnostic when it comes to implementing, stabilizing, and tuning Hadoop ecosystems.
- You’d be required to provide implementation guidance, best practices framework, and technical thought leadership to our customers for their Hadoop Data lake implementation and migration initiatives.
- You need to be 100% hands-on and, as required, test, monitor, administer, and operate multiple data lake clusters across data centers.
- Troubleshoot issues across the entire stack - hardware, software, application, and network.
- Dive into problems with an eye to both immediate remediations as well as the follow-through changes and automation that will prevent future occurrences.
- Must demonstrate exceptional troubleshooting and strong architectural skills, and be able to describe solutions clearly and effectively in both verbal and written form.
Requirements
- Customer-focused, Self-driven, and Motivated with a strong work ethic and a passion for problem-solving.
- 4+ years of designing, implementing, tuning, and managing services in a distributed, enterprise-scale on-premise and public/private cloud environment.
- Familiarity with infrastructure management and operations lifecycle concepts and ecosystem.
- Hadoop cluster design, implementation, management, and performance tuning experience with HDFS, YARN, Hive/Impala, Spark, Kerberos, and related Hadoop technologies is a must.
- Must have strong SQL/HQL query troubleshooting and tuning skills on Hive/HBase (see the sketch after this list).
- Must have a strong capacity planning experience for Hadoop ecosystems/data lakes.
- Good to have hands-on experience with Kafka, Ranger/Sentry, NiFi, Ambari, Cloudera Manager, and HBase.
- Good to have data modeling, data engineering, and data security experience within the Hadoop ecosystem.
- Good to have deep JVM/Java debugging and tuning skills.
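As a minimal sketch of the HQL troubleshooting and tuning mentioned above: partition pruning is a common first step, shown here through a Hive-enabled PySpark session. The database, table, and partition column are hypothetical, and the snippet assumes it runs on a cluster with Hive support configured.

```python
# A minimal Hive/Spark tuning sketch. analytics.events and its dt
# partition column are assumptions; assumes a Hive-enabled cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-tuning-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Filtering on the partition column (dt) lets Hive/Spark skip whole
# partitions instead of scanning the full table, a frequent fix for
# slow HQL queries.
df = spark.sql(
    "SELECT user_id, event_type FROM analytics.events WHERE dt = '2024-01-01'"
)

# The physical plan confirms whether partition pruning actually happened.
df.explain()
```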
- Minimum 2 years of work experience with Snowflake and Azure storage (see the sketch after this list).
- Minimum 3 years of development experience with ETL tools.
- Strong SQL skills in other databases such as Oracle, SQL Server, DB2, and Teradata.
- Good to have Hadoop and Spark experience.
- Good conceptual knowledge of data warehousing and related methodologies.
- Working knowledge of scripting, such as UNIX shell.
- Good presentation and communication skills.
- Should be flexible with overlapping working hours.
- Should be able to work independently and be proactive.
- Good understanding of the Agile development cycle.
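As a minimal sketch of the Snowflake work listed above, here is a basic query using the snowflake-connector-python package. The account, credentials, warehouse, database, and table names are all placeholders, not real configuration.

```python
# A hypothetical Snowflake query sketch; every connection value and the
# daily_sales table are placeholder assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(revenue) FROM daily_sales GROUP BY region")
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    conn.close()
```

In practice credentials would come from a secrets manager rather than literals, and the same cursor pattern carries over to the ETL-tool integrations the role mentions.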






